
    Generative-Discriminative Complementary Learning

    The majority of state-of-the-art deep learning methods are discriminative approaches, which model the conditional distribution of labels given input features. The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases. In this paper, we study the complementary learning problem. Unlike ordinary labels, complementary labels are easy to obtain because an annotator only needs to provide a yes/no answer to a randomly chosen candidate class for each instance. We propose a generative-discriminative complementary learning method that estimates the ordinary labels by modeling both the conditional (discriminative) and instance (generative) distributions. Our method, which we call Complementary Conditional GAN (CCGAN), improves the accuracy of predicting ordinary labels and can generate high-quality instances despite the weak supervision. In addition to extensive empirical studies, we theoretically show that our model can retrieve the true conditional distribution from the complementarily-labeled data.
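    As a concrete illustration of the discriminative side of this setting, the sketch below shows one common way to train a classifier from complementary labels: each example carries a single class it does not belong to, and the loss pushes the predicted probability of that class toward zero. The loss form, the toy model, and the class count are assumptions for illustration only, not the paper's estimator or the CCGAN model itself.

    # Minimal sketch (an assumption, not the paper's exact estimator) of a
    # discriminative loss for complementary labels: each training example comes
    # with one class it does NOT belong to, and we drive the predicted
    # probability of that class toward zero.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ComplementaryLoss(nn.Module):
        """Penalizes probability mass assigned to the complementary (ruled-out) class."""
        def forward(self, logits, comp_labels):
            # logits: (batch, num_classes); comp_labels: (batch,) class each sample is NOT
            probs = F.softmax(logits, dim=1)
            p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
            # -log(1 - p_comp) is small when the model rules out the complementary class
            return -torch.log(1.0 - p_comp + 1e-12).mean()

    # Hypothetical usage with any classifier that produces class logits
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    criterion = ComplementaryLoss()
    x = torch.randn(8, 1, 28, 28)
    comp_y = torch.randint(0, 10, (8,))   # "not this class" annotations
    loss = criterion(model(x), comp_y)
    loss.backward()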

    Effective Drusen Localization for Early AMD Screening using Sparse Multiple Instance Learning

    Age-related Macular Degeneration (AMD) is one of the leading causes of blindness. Automatic screening of AMD has attracted much research effort in recent years because it benefits both patients and ophthalmologists. Drusen are an important clinical indicator of early-stage AMD, so accurately detecting and localizing drusen is important for AMD detection and grading. In this paper, we propose an effective approach to localize drusen in fundus images. The approach trains a drusen classifier from a weakly labeled dataset, i.e., only the existence of drusen is known but not their exact locations or boundaries, by employing Multiple Instance Learning (MIL). Specifically, considering the sparsity of drusen in fundus images, we employ sparse Multiple Instance Learning to obtain better performance than classical MIL. Experiments on 350 fundus images, 96 of which have AMD, demonstrate that on the task of AMD detection, both the classical and sparse versions of multiple instance learning achieve performance comparable to a fully supervised SVM. On the task of drusen localization, sparse MIL outperforms classical MIL significantly.
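    The sketch below illustrates the general idea in a neural-network form rather than the SVM-based formulation typically used for sparse MIL: an image is treated as a bag of patch features, the bag score is the maximum instance score, and a sparsity penalty on instance activations reflects the assumption that only a few patches contain drusen. The architecture, penalty, and hyperparameters are illustrative assumptions, not the paper's method.

    # Minimal sketch (an assumption, not the paper's formulation) of sparse MIL:
    # a bag (fundus image) is a set of instance features (patches); the bag score
    # is the max instance score, and a sparsity penalty on instance activations
    # encourages only a few patches (sparse drusen) to fire.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMIL(nn.Module):
        def __init__(self, feat_dim, l1_weight=1e-3):
            super().__init__()
            self.scorer = nn.Linear(feat_dim, 1)   # per-instance drusen score
            self.l1_weight = l1_weight

        def forward(self, bag):
            # bag: (num_instances, feat_dim) features of patches from one image
            scores = self.scorer(bag).squeeze(-1)      # (num_instances,)
            bag_logit = scores.max()                   # max-pooled bag prediction
            sparsity = torch.sigmoid(scores).sum()     # penalty on instance activations
            return bag_logit, self.l1_weight * sparsity

    model = SparseMIL(feat_dim=128)
    bag = torch.randn(64, 128)            # 64 patches from one fundus image
    label = torch.tensor(1.0)             # image-level label: drusen present
    bag_logit, sparsity_penalty = model(bag)
    loss = F.binary_cross_entropy_with_logits(bag_logit, label) + sparsity_penalty
    loss.backward()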

    MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images

    This paper introduces a methodology for producing high-quality 3D lung CT images guided by textual information. While diffusion-based generative models are increasingly used in medical imaging, current state-of-the-art approaches are limited to low-resolution outputs and underutilize the abundant information in radiology reports. Radiology reports can enhance the generation process by providing additional guidance and offering fine-grained control over image synthesis. Nevertheless, extending text-guided generation to high-resolution 3D images poses significant challenges in memory usage and in preserving anatomical detail. To address the memory issue, we introduce a hierarchical scheme that uses a modified UNet architecture: we first synthesize low-resolution images conditioned on the text, which serve as a foundation for subsequent generators that produce the complete volumetric data. To ensure the anatomical plausibility of the generated samples, we provide further guidance by generating vascular, airway, and lobular segmentation masks in conjunction with the CT images. The model can thus use both textual input and segmentation tasks to generate synthesized images. Comparative assessments indicate that our approach outperforms the most advanced GAN- and diffusion-based models, especially in accurately retaining crucial anatomical features such as fissure lines, airways, and vascular structures. This study focuses on two main objectives: (1) the development of a method for creating images based on textual prompts and anatomical components, and (2) the capability to generate new images conditioned on anatomical elements. These advances in image generation can be applied to enhance numerous downstream tasks.
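    The skeleton below sketches the coarse-to-fine idea described above: a low-resolution 3D volume is generated from a report embedding, then a second stage upsamples it and also predicts anatomy masks used as guidance. All layer choices, shapes, and names are placeholder assumptions, not the MedSyn architecture or released code.

    # Schematic sketch (assumptions throughout; not the MedSyn implementation) of a
    # coarse-to-fine text-guided pipeline: a low-resolution 3D generator conditioned
    # on a report embedding, followed by a refinement stage that upsamples the
    # volume and emits anatomy masks (e.g., vessel / airway / lobe).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LowResGenerator(nn.Module):
        """Maps a text embedding to a coarse 3D CT volume (placeholder layers)."""
        def __init__(self, text_dim=512, vox=32):
            super().__init__()
            self.vox = vox
            self.fc = nn.Linear(text_dim, vox ** 3)

        def forward(self, text_emb):
            return self.fc(text_emb).view(-1, 1, self.vox, self.vox, self.vox)

    class RefineStage(nn.Module):
        """Upsamples the coarse volume and predicts CT plus anatomy masks."""
        def __init__(self, n_masks=3):
            super().__init__()
            self.body = nn.Conv3d(1, 8, kernel_size=3, padding=1)
            self.to_ct = nn.Conv3d(8, 1, kernel_size=1)
            self.to_masks = nn.Conv3d(8, n_masks, kernel_size=1)

        def forward(self, coarse):
            up = F.interpolate(coarse, scale_factor=2, mode="trilinear", align_corners=False)
            h = F.relu(self.body(up))
            return self.to_ct(h), self.to_masks(h)

    text_emb = torch.randn(1, 512)          # embedding of a radiology report (hypothetical)
    coarse = LowResGenerator()(text_emb)    # (1, 1, 32, 32, 32) coarse volume
    ct, masks = RefineStage()(coarse)       # (1, 1, 64, 64, 64) CT and (1, 3, 64, 64, 64) masks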

    Learning to screen Glaucoma like the ophthalmologists

    The GAMMA Challenge is organized to encourage AI models to screen for glaucoma from a combination of a 2D fundus image and a 3D optical coherence tomography (OCT) volume, as ophthalmologists do.
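    A common way to combine the two modalities is a dual-branch network, sketched below: a 2D branch encodes the fundus image, a 3D branch encodes the OCT volume, and the pooled features are fused for classification. This is an illustrative assumption, not any particular team's GAMMA solution.

    # Minimal sketch (assumption, not a challenge entry) of dual-modality fusion:
    # a 2D branch for the fundus image, a 3D branch for the OCT volume, and a
    # linear head on the concatenated pooled features for glaucoma screening.
    import torch
    import torch.nn as nn

    class FundusOCTFusion(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.fundus_branch = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.oct_branch = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1))
            self.head = nn.Linear(32, n_classes)

        def forward(self, fundus, oct_vol):
            f = self.fundus_branch(fundus).flatten(1)    # (B, 16) fundus features
            o = self.oct_branch(oct_vol).flatten(1)      # (B, 16) OCT features
            return self.head(torch.cat([f, o], dim=1))   # fused glaucoma logits

    logits = FundusOCTFusion()(torch.randn(2, 3, 256, 256), torch.randn(2, 1, 64, 128, 128))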